
    Boolean-controlled systems via receding horizon and linear programming

    We consider dynamic systems controlled by Boolean signals or decisions. We show that in a number of cases, the receding horizon formulation of the control problem can be solved via linear programming by relaxing the binary constraints on the control. The idea behind our approach is conceptually simple: a feasible control can be forced by imposing that the Boolean signal is set to one at least once over the horizon. We translate this idea into constraints on the controls and analyze the polyhedron of all feasible controls. We specialize the approach to the stabilizability of switched and impulsively controlled systems.
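
    As a minimal sketch of the relaxation idea (not taken from the paper), the binary constraint u_k in {0,1} can be replaced by 0 <= u_k <= 1 and feasibility forced by requiring at least one activation over the horizon; the horizon length and cost vector below are illustrative assumptions.

```python
# Sketch only: horizon length and cost vector are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

N = 10                                  # receding-horizon length (assumed)
c = np.linspace(1.0, 2.0, N)            # per-step activation cost (assumed)

# Relax u_k in {0,1} to 0 <= u_k <= 1 and force feasibility by requiring the
# Boolean signal to be set to one at least once: sum_k u_k >= 1, written as
# -sum_k u_k <= -1 for linprog.
res = linprog(c, A_ub=-np.ones((1, N)), b_ub=[-1.0], bounds=[(0.0, 1.0)] * N)
print(res.x)                            # the relaxed optimum activates the cheapest step
```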

    Full information H-infinity-control for discrete-time infinite Markov jump parameter systems

    In this paper we consider the full information discrete-time H-infinity-control problem for the class of linear systems with Markovian jumping parameters. The state space of the Markov chain is assumed to be a countably infinite set. Full information here means that the controller has access to both the state variables and the jump variables. A necessary and sufficient condition for the existence of a feedback controller that makes the l(2)-induced norm of the system less than a prespecified bound is obtained. This condition is written in terms of an infinite set of coupled algebraic Riccati equations.
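
    The H-infinity game form of these coupled Riccati equations is not reproduced here; the sketch below only illustrates the coupling operator E_i(X) = sum_j p(ij) X_j that makes the equations interdependent, using the simpler coupled Lyapunov recursion on a finite two-mode truncation with assumed matrices.

```python
# Sketch only: two modes stand in for a finite truncation of the countably
# infinite chain; all matrices are assumed for illustration.
import numpy as np

A = [np.array([[0.5, 0.1], [0.0, 0.6]]),     # mode-1 dynamics (assumed)
     np.array([[0.7, 0.0], [0.2, 0.4]])]     # mode-2 dynamics (assumed)
C = [np.eye(2), np.eye(2)]                   # output matrices (assumed)
P = np.array([[0.9, 0.1],                    # transition matrix of the jump chain (assumed)
              [0.3, 0.7]])

def couple(X):
    # Coupling operator E_i(X) = sum_j p(ij) X_j, which ties the equations together.
    return [sum(P[i, j] * X[j] for j in range(len(X))) for i in range(len(X))]

# Coupled Lyapunov recursion X_i <- A_i' E_i(X) A_i + C_i' C_i: with C_i = I,
# bounded iterates are equivalent to mean-square stability of the truncation.
X = [np.zeros((2, 2)) for _ in A]
for _ in range(200):
    EX = couple(X)
    X = [A[i].T @ EX[i] @ A[i] + C[i].T @ C[i] for i in range(len(A))]
print(X[0])
```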

    Jump LQ-optimal control for discrete-time Markovian systems with stochastic inputs

    In this paper we consider the discrete-time LQ-optimal control problem for the class of linear systems with Markovian jump parameters and an additive l(2)-stochastic input. The state space of the Markov chain is assumed to be a countably infinite set. The controller has access to both the state variable and the jump variable. It is shown that the optimal control law is characterized by a feedback term plus a term defined by the l(2)-stochastic input and the Markov chain. An application to the optimal control of a failure-prone manufacturing system subject to a random demand for a single type of item is presented.
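
    A minimal sketch of the feedback part of such a control law, assuming a two-mode truncation and illustrative matrices (none taken from the paper): the coupled Riccati recursion is iterated and the mode-dependent gains K_i are read off; the feedforward term driven by the l(2) input is omitted.

```python
# Sketch only: illustrative two-mode data; the input-driven feedforward term
# in the optimal law is not computed here.
import numpy as np

A = [np.array([[0.6, 0.2], [0.0, 0.7]]), np.array([[0.8, 0.1], [0.1, 0.5]])]
B = [np.array([[0.0], [1.0]]),           np.array([[1.0], [0.5]])]
Q = [np.eye(2), np.eye(2)]               # state weights (assumed)
R = [np.eye(1), np.eye(1)]               # control weights (assumed)
P = np.array([[0.8, 0.2], [0.4, 0.6]])   # jump-chain transition matrix (assumed)

def couple(X):
    return [sum(P[i, j] * X[j] for j in range(len(X))) for i in range(len(X))]

# Coupled Riccati recursion for the jump LQ problem, iterated from X_i = 0.
X = [np.zeros((2, 2)) for _ in A]
for _ in range(500):
    EX = couple(X)
    X = [Q[i] + A[i].T @ EX[i] @ A[i]
         - A[i].T @ EX[i] @ B[i] @ np.linalg.solve(
               R[i] + B[i].T @ EX[i] @ B[i], B[i].T @ EX[i] @ A[i])
         for i in range(len(A))]

# Mode-dependent gains of the feedback part: u_k = -K[mode_k] @ x_k.
EX = couple(X)
K = [np.linalg.solve(R[i] + B[i].T @ EX[i] @ B[i], B[i].T @ EX[i] @ A[i])
     for i in range(len(A))]
print(K[0], K[1])
```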

    A convex programming approach to H-2 control of discrete-time Markovian jump linear systems

    In this paper we consider the H-2-control problem for the class of discrete-time linear systems with parameters subject to Markovian jumps, using a convex programming approach. We generalize the definition of the H-2 norm from the deterministic case to the Markovian jump case and establish a link between this norm and the observability and controllability Gramians. Conditions for the existence and derivation of a mean-square stabilizing controller for a Markovian jump linear system are established using convex analysis. The main contribution of the paper is to provide a convex programming formulation of the H-2-control problem, so that several important cases, to our knowledge not analysed in previous work, can be addressed. Regarding the transition matrix P = [p(ij)] of the Markov chain, two situations are considered: the case in which it is exactly known, and the case in which it is not exactly known but belongs to an appropriate convex set. Regarding the state variable and the jump variable, the cases in which they may or may not be available to the controller are considered. If they are not available, the H-2-control problem can be written as an optimization problem over the intersection of a convex set and a set defined by nonlinear real-valued functions. These nonlinear constraints exhibit important geometrical properties, leading to cutting-plane-like algorithms. The theory is illustrated by numerical simulations.
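
    The full H-2 synthesis program is not reproduced here; the sketch below only shows the kind of coupled LMI feasibility problem such convex formulations build on, namely the mean-square stability condition for an assumed two-mode discrete-time jump system. The data and the use of cvxpy are assumptions, not the paper's setup.

```python
# Sketch only: mean-square stability LMIs for an assumed two-mode system,
# solved with cvxpy; the paper's H-2 synthesis adds further variables and
# constraints on top of coupled conditions of this kind.
import numpy as np
import cvxpy as cp

A = [np.array([[0.5, 0.3], [0.0, 0.6]]),   # mode dynamics (assumed)
     np.array([[0.7, 0.1], [0.2, 0.4]])]
P = np.array([[0.6, 0.4], [0.5, 0.5]])     # transition matrix (assumed)
n, N = 2, len(A)

X = [cp.Variable((n, n), symmetric=True) for _ in range(N)]
eps = 1e-6
cons = []
for i in range(N):
    EX = sum(P[i, j] * X[j] for j in range(N))        # coupling E_i(X) = sum_j p(ij) X_j
    cons += [X[i] >> eps * np.eye(n),                 # X_i positive definite
             A[i].T @ EX @ A[i] - X[i] << -eps * np.eye(n)]  # coupled Lyapunov LMI

prob = cp.Problem(cp.Minimize(0), cons)
prob.solve()
print(prob.status)   # "optimal" here certifies mean-square stability
```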

    Solutions for the linear-quadratic control problem of Markov jump linear systems

    The paper is concerned with recursive methods for obtaining the stabilizing solution of the coupled algebraic Riccati equations arising in the linear-quadratic control of Markovian jump linear systems, by solving uncoupled algebraic Riccati equations at each iteration. It is shown that the updates carried out at each iteration represent approximations of the original control problem by control problems with receding horizon, for which certain sequences of stopping times define the terminal time. Under this approach, unlike previous results, no initialization conditions are required to guarantee the convergence of the algorithms. The methods can be ordered in terms of the number of iterations needed to reach convergence, and comparisons with existing methods in the current literature are presented. We also extend and generalize current results in the literature on the existence of the mean-square stabilizing solution of coupled algebraic Riccati equations.
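
    One way to decouple the equations along these lines (a sketch under assumed two-mode data, not necessarily the paper's exact recursion) is to freeze the cross-coupling terms at each iteration, so that each mode reduces to a standard, uncoupled discrete algebraic Riccati equation solvable with scipy.linalg.solve_discrete_are.

```python
# Sketch only: assumed two-mode data; one possible way to decouple the coupled
# Riccati equations, not necessarily the paper's recursion (requires p(ii) > 0).
import numpy as np
from scipy.linalg import solve_discrete_are

A = [np.array([[0.7, 0.1], [0.0, 0.6]]), np.array([[0.5, 0.2], [0.1, 0.8]])]
B = [np.array([[0.0], [1.0]]),           np.array([[1.0], [0.0]])]
Q = [np.eye(2), np.eye(2)]
R = [np.eye(1), np.eye(1)]
P = np.array([[0.7, 0.3], [0.4, 0.6]])   # transition matrix (assumed)

X = [np.zeros((2, 2)) for _ in A]
for _ in range(50):
    X_new = []
    for i in range(len(A)):
        # Freeze the cross-coupling term D_i = sum_{j != i} p(ij) X_j ...
        D = sum(P[i, j] * X[j] for j in range(len(A)) if j != i)
        # ... so that, in Y_i = p(ii) X_i + D_i, each mode reduces to a standard
        # uncoupled DARE with data (sqrt(p(ii)) A_i, B_i, p(ii) Q_i + D_i, R_i).
        Y = solve_discrete_are(np.sqrt(P[i, i]) * A[i], B[i],
                               P[i, i] * Q[i] + D, R[i])
        X_new.append((Y - D) / P[i, i])
    X = X_new
print(X[0])   # approximation of the stabilizing solution of the coupled equations
```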

    Output feedback control of Markov jump linear systems in continuous-time

    This paper addresses the dynamic output feedback control problem for continuous-time Markovian jump linear systems. The fundamental point in the analysis is an LMI characterization comprising all dynamical compensators that stabilize the closed-loop system in the mean-square sense. The H-2 and H-infinity-norm control problems are studied, and the H-2 and H-infinity filtering problems are solved as a by-product.
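
    The output-feedback synthesis LMIs themselves are not reproduced here; as a sketch of the kind of coupled LMI conditions involved, the code below checks continuous-time mean-square stability of an assumed two-mode jump system with cvxpy (the data and the solver choice are assumptions).

```python
# Sketch only: continuous-time mean-square stability LMIs for an assumed
# two-mode system; the paper's output-feedback H-2/H-infinity synthesis layers
# additional variables and a compensator parametrization on top of coupled
# conditions of this type.
import numpy as np
import cvxpy as cp

A = [np.array([[-1.0, 0.5], [0.0, -2.0]]),   # mode dynamics (assumed)
     np.array([[-0.5, 0.0], [1.0, -1.5]])]
Lam = np.array([[-0.8, 0.8],                 # generator of the jump chain (assumed)
                [0.6, -0.6]])
n, N = 2, len(A)

Pvar = [cp.Variable((n, n), symmetric=True) for _ in range(N)]
eps = 1e-6
cons = []
for i in range(N):
    coupling = sum(Lam[i, j] * Pvar[j] for j in range(N))
    cons += [Pvar[i] >> eps * np.eye(n),
             A[i].T @ Pvar[i] + Pvar[i] @ A[i] + coupling << -eps * np.eye(n)]

prob = cp.Problem(cp.Minimize(0), cons)
prob.solve()
print(prob.status)   # "optimal" certifies mean-square stability of the jump system
```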